
    SearchMorph: Multi-scale Correlation Iterative Network for Deformable Registration

    Deformable image registration can obtain dynamic information about images, which is of great significance in medical image analysis. Unsupervised deep learning registration methods can quickly achieve high registration accuracy without labels. However, these methods generally suffer from uncorrelated features, poor ability to register large deformations and details, and unnatural deformation fields. To address these issues, we propose an unsupervised multi-scale correlation iterative registration network (SearchMorph). In the proposed network, we introduce a correlation layer to strengthen the relevance between features and construct a correlation pyramid to provide multi-scale relevance information to the network. We also design a deformation field iterator that improves the model's ability to register details and large deformations through a search module and a GRU while keeping the deformation field realistic. We use single-temporal brain MR images and multi-temporal echocardiographic sequences to evaluate the model's ability to register large deformations and details. The experimental results demonstrate that the proposed method achieves the highest registration accuracy and the lowest folding-point ratio, with a shorter elapsed time than state-of-the-art methods.
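The correlation layer the abstract mentions can be sketched as an all-pairs cost volume over two feature maps. This is an illustrative, RAFT-style reading of the idea, not the authors' exact implementation; the nested-list shapes and the 1/sqrt(C) normalization are assumptions.

```python
import math

def correlation_volume(feat1, feat2):
    """All-pairs correlation between two H x W x C feature maps.

    feat1, feat2: nested lists of shape [H][W][C]. Returns a 4-D volume
    corr[i][j][k][l] = <feat1[i][j], feat2[k][l]> / sqrt(C), the relevance
    of position (i, j) in one image to position (k, l) in the other.
    """
    H, W, C = len(feat1), len(feat1[0]), len(feat1[0][0])
    scale = 1.0 / math.sqrt(C)
    return [[[[sum(a * b for a, b in zip(feat1[i][j], feat2[k][l])) * scale
               for l in range(W)]
              for k in range(H)]
             for j in range(W)]
            for i in range(H)]
```

Pooling this volume at several resolutions would give the multi-scale correlation pyramid the network searches over.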

    Prospective assessment of breast lesions AI classification model based on ultrasound dynamic videos and ACR BI-RADS characteristics

    Introduction: AI-assisted ultrasound diagnosis is considered a fast and accurate new method that can reduce the subjective and experience-dependent nature of handheld ultrasound. To better meet clinical diagnostic needs, we first proposed a breast lesion AI classification model based on ultrasound dynamic videos and ACR BI-RADS characteristics (hereafter, Auto BI-RADS). In this study, we prospectively verify its performance. Methods: Model development was based on retrospective data comprising 480 ultrasound dynamic videos, equivalent to 18,122 static images, of pathologically proven breast lesions from 420 patients. A total of 292 breast lesion ultrasound dynamic videos from internal and external hospitals were prospectively tested by Auto BI-RADS. The performance of Auto BI-RADS was compared with both experienced and junior radiologists using the DeLong method, Kappa test, and McNemar test. Results: Auto BI-RADS achieved an accuracy, sensitivity, and specificity of 0.87, 0.93, and 0.81, respectively. The consistency of the BI-RADS category between Auto BI-RADS and the experienced group (Kappa: 0.82) was higher than that with the junior group (Kappa: 0.60). The consistency rates between Auto BI-RADS and the experienced group were higher than those between Auto BI-RADS and the junior group for shape (93% vs. 80%; P = .01), orientation (90% vs. 84%; P = .02), margin (84% vs. 71%; P = .01), echo pattern (69% vs. 56%; P = .001), and posterior features (76% vs. 71%; P = .0046), while the difference for calcification was not significant. Discussion: In this study, we aimed to prospectively verify a novel AI tool based on ultrasound dynamic videos and ACR BI-RADS characteristics. The prospective assessment suggested that the AI tool not only better meets clinical needs but also reaches the diagnostic efficiency of experienced radiologists.
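The Kappa consistency values reported above follow Cohen's kappa, which can be computed as below; the rating lists in the sketch are made-up placeholders, not study data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa between two raters' category lists of equal length."""
    n = len(rater1)
    # Observed agreement: fraction of cases where the raters match.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the raters' marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

A value near 0.82 (the experienced group) indicates strong agreement; 0.60 (juniors) only moderate agreement.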

    A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features

    This paper proposes a new supervised method for blood vessel segmentation using Zernike moment-based shape descriptors. The method implements pixel-wise classification by computing an 11-D feature vector comprising both statistical (gray-level) features and shape-based (Zernike moment) features. The feature set also contains the optimal coefficients of the Zernike moments, derived from the maximum differentiability between blood vessel and background pixels. Manually selected training points from the training set of the DRIVE database, covering all possible manifestations, were used to train the ANN-based binary classifier. The method was evaluated on unknown test samples of the DRIVE and STARE databases and returned accuracies of 0.945 and 0.9486, respectively, outperforming other existing supervised learning methods. Further, the segmented outputs covered thinner blood vessels better than previous methods, aiding in the early detection of pathologies.
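The Zernike moment features rest on the standard radial polynomial R_n^m(rho); a minimal sketch of that polynomial is given below. The specific optimal orders the paper selects are not stated in the abstract, so n and m here are generic.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Standard Zernike radial polynomial R_n^m(rho).

    Requires n >= |m| and (n - |m|) even; rho is the normalized radius in [0, 1].
    """
    m = abs(m)
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

The full moment would integrate these polynomials (with an angular phase term) against the image patch around each pixel.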

    FPGA-based systolic deconvolution architecture for upsampling

    A deconvolution accelerator is proposed to upsample an n × n input to a 2n × 2n output by convolving with a k × k kernel. Its architecture avoids the insertion and padding of zeros and thus eliminates the redundant computations, achieving high resource efficiency with a reduced number of multipliers and adders. The architecture is systolic and governed by a reference clock, enabling sequential placement of the module to form a pipelined decoder framework. The proposed accelerator, implemented on a Xilinx XC7Z020 platform, achieves a performance of 3.641 giga operations per second (GOPS) with a resource efficiency of 0.135 GOPS/DSP when upsampling a 32 × 32 input to a 256 × 256 output using a 3 × 3 kernel at 200 MHz. Furthermore, its high peak signal-to-noise ratio of almost 80 dB shows that the upsampled outputs of the bit-truncated accelerator are comparable to IEEE double-precision results.
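For contrast, the naive zero-insertion deconvolution that the accelerator renders unnecessary can be sketched as follows: interleave zeros to reach 2n × 2n, then apply a 'same' k × k convolution. Every other product multiplies a known zero, which is exactly the redundancy a direct architecture eliminates. Input and kernel here are illustrative.

```python
def deconv_zero_insertion(x, kernel):
    """Naive baseline: upsample n x n to 2n x 2n via zero insertion + 'same' conv."""
    n, k = len(x), len(kernel)
    m, p = 2 * n, k // 2
    # Place the input at even coordinates; odd coordinates stay zero.
    up = [[0.0] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            up[2 * i][2 * j] = x[i][j]
    # 'Same' convolution with zero padding of width p around the border.
    out = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            s = 0.0
            for a in range(k):
                for b in range(k):
                    r, c = i + a - p, j + b - p
                    if 0 <= r < m and 0 <= c < m:
                        s += kernel[a][b] * up[r][c]
            out[i][j] = s
    return out
```

A systolic design produces the same outputs by multiplying only the non-zero (original) samples.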

    Myocardial strain analysis of echocardiography based on deep learning

    Background: Strain analysis provides more thorough spatiotemporal signatures of myocardial contraction, which is helpful for early detection of cardiac insufficiency. The use of deep learning (DL) to automatically measure myocardial strain from echocardiogram videos has garnered recent attention. However, the development of key techniques, including segmentation and motion estimation, remains a challenge. In this work, we developed a novel DL-based framework for myocardial segmentation and motion estimation to generate strain measures from echocardiogram videos. Methods: A three-dimensional (3D) convolutional neural network (CNN) was developed for myocardial segmentation and an optical flow network for motion estimation. The segmentation network was used to define the region of interest (ROI), and the optical flow network was used to estimate pixel motion in the ROI. We performed a model architecture search to identify the optimal base architecture for motion estimation. The final workflow design and associated hyperparameters are the result of a careful implementation. In addition, we compared the DL model with a traditional speckle tracking algorithm on independent, external clinical data. Each video was measured double-blind by an ultrasound expert and a DL expert using speckle tracking echocardiography (STE) and the DL method, respectively. Results: The DL method successfully performed automatic segmentation, motion estimation, and global longitudinal strain (GLS) measurement in all examinations. The 3D segmentation has better spatiotemporal smoothness, with an average Dice coefficient of 0.82, and handles the target frame better than previous 2D networks. The best motion estimation network achieved an average end-point error of 0.05 ± 0.03 mm per frame, better than the previously reported state of the art. The DL method showed no significant difference from the traditional method in GLS measurement, with a Spearman correlation of 0.90 (p < 0.001) and a mean bias of −1.2 ± 1.5%. Conclusion: Our method exhibits better segmentation and motion estimation performance and demonstrates the feasibility of the DL method for automatic strain analysis. The DL approach helps reduce time consumption and human effort, which holds great promise for translational research and precision medicine efforts.
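The GLS measure above is conventionally the percent change of myocardial contour length between end-diastole and end-systole. A minimal sketch, with contour extraction from the segmentation and motion fields assumed, is:

```python
def contour_length(points):
    """Polyline length of a myocardial contour given as (x, y) points."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def gls_percent(l_ed, l_es):
    """GLS = (L_es - L_ed) / L_ed * 100; negative for a contracting ventricle."""
    return (l_es - l_ed) / l_ed * 100.0
```

In the pipeline described above, L_ed comes from the segmented end-diastolic contour and L_es from tracking that contour through the estimated motion field.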

    Automatic Segmentation of Left Ventricle in Echocardiography Based on YOLOv3 Model to Achieve Constraint and Positioning

    Cardiovascular disease (CVD) is the most common type of disease and has a high fatality rate in humans. Early diagnosis is critical for the prognosis of CVD. Before myocardial tissue strain, strain rate, and other indicators can be used to evaluate and analyze cardiac function, accurate segmentation of the left ventricle (LV) endocardium is vital for ensuring the accuracy of subsequent diagnosis. For accurate segmentation of the LV endocardium, this paper proposes extracting LV region features based on the YOLOv3 model to locate the apex, the bottom, and the LV region; thereafter, subimages of the LV can be obtained and, based on a Markov random field (MRF) model, preliminary identification and binarization of the myocardium in the LV subimages can be realized. Finally, under the constraints of the three aforementioned positions, precise segmentation and extraction of the LV endocardium are achieved using nonlinear least-squares curve fitting and edge approximation. The experiments show that the segmentation evaluation indices of the proposed method, including computation speed, Dice, mean absolute distance (MAD), and Hausdorff distance (HD), reach 2.1–2.25 fps, 93.57 ± 1.97%, 2.57 ± 0.89 mm, and 6.68 ± 1.78 mm, respectively. This indicates that the suggested method has better segmentation accuracy and robustness than existing techniques.
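The Dice index reported above (93.57 ± 1.97%) measures overlap between the predicted and reference segmentations; a minimal sketch, with flat 0/1 lists standing in for binary mask images, is:

```python
def dice(mask_a, mask_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for equal-length 0/1 masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0
```

MAD and HD complement Dice by measuring boundary distances rather than area overlap.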

    Multi-Features-Based Automated Breast Tumor Diagnosis Using Ultrasound Image and Support Vector Machine

    Breast ultrasound examination is a routine, fast, and safe method for the clinical diagnosis of breast tumors. In this paper, a classification method based on multiple features and support vector machines is proposed for breast tumor diagnosis. The multi-feature set is composed of characteristic features and deep learning features of breast tumor images. Initially, an improved level set algorithm was used to segment the lesion in breast ultrasound images, which enabled accurate calculation of characteristic features such as orientation, edge indistinctness, characteristics of the posterior shadowing region, and shape complexity. Simultaneously, we used transfer learning to construct a pretrained model as a feature extractor for the deep learning features of breast ultrasound images. Finally, the multi-features were fused and fed to a support vector machine for the classification of breast ultrasound images. The proposed model, when tested on unknown samples, provided a classification accuracy of 92.5% for cancerous and noncancerous tumors.
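One plausible reading of the "shape complexity" characteristic feature is compactness, perimeter² / (4π · area), which equals 1 for a perfect circle and grows with boundary irregularity. The abstract does not define the descriptor exactly, so this is an illustrative stand-in, not the paper's formula.

```python
import math

def compactness(perimeter, area):
    """Isoperimetric shape complexity: 1.0 for a circle, larger for jagged shapes."""
    return perimeter ** 2 / (4.0 * math.pi * area)
```

In the pipeline above, the perimeter and area would come from the level-set segmentation of the lesion.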

    Fusion of ANNs as decoder of retinal spike trains for scene reconstruction

    The retina is one of the most developed sensing organs in the human body. However, knowledge of the coding and decoding of retinal neurons is still rather limited. Compared with coding (i.e., transforming visual scenes into retinal spike trains), decoding (i.e., reconstructing visual scenes from spike trains, especially those of complex stimuli) is more complex and has received less attention. In this paper, we focus on the accurate reconstruction of visual scenes from their spike trains by designing a retinal spike train decoder based on the combination of a Fully Connected Network (FCN), a Capsule Network (CapsNet), and a Convolutional Neural Network (CNN), together with a loss function incorporating the structural similarity index measure (SSIM) and the L1 loss. CapsNet is used to extract features from the spike trains, which are fused with the original spike trains and used as inputs to the FCN and CNN to facilitate scene reconstruction. The feasibility and superiority of our model are evaluated on five datasets (MNIST, Fashion-MNIST, CIFAR-10, CelebA-HQ, and COCO). The model is evaluated quantitatively with four image evaluation indices: SSIM, MSE, PSNR, and Intra-SSIM. The results show that the model provides a new means of decoding visual scene stimuli from retinal spike trains and promotes the development of brain-machine interfaces.
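The loss combining SSIM and L1 can be sketched as a weighted sum α(1 − SSIM) + (1 − α) · L1. The single-window (global) SSIM, the value of α, and the stability constants below are assumptions for illustration, not the paper's settings.

```python
def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified single-window SSIM over two flat pixel lists in [0, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_l1_loss(x, y, alpha=0.85):
    """Combined loss: alpha * (1 - SSIM) + (1 - alpha) * mean absolute error."""
    l1 = sum(abs(a - b) for a, b in zip(x, y)) / len(x)
    return alpha * (1.0 - global_ssim(x, y)) + (1.0 - alpha) * l1
```

The SSIM term rewards structural agreement while the L1 term penalizes per-pixel error, a common pairing for reconstruction networks.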

    Sentiment Analysis of Chinese Product Reviews Based on Fusion of DUAL-Channel BiLSTM and Self-Attention

    Product reviews provide crucial information for both consumers and businesses, offering insights needed before purchasing a product or service. However, existing sentiment analysis methods, especially for the Chinese language, struggle to effectively capture contextual information due to complex semantics, multiple sentiment polarities, and long-term dependencies between words. In this paper, we propose a sentiment classification method based on the BiLSTM algorithm to address these challenges in natural language processing. Self-Attention-CNN BiLSTM (SAC-BiLSTM) leverages dual channels to extract features from both character-level and word-level embeddings. It combines BiLSTM and self-attention mechanisms for feature extraction and weight allocation, aiming to overcome the limitations in mining contextual information. Experiments were conducted on the onlineshopping10cats dataset, a standard corpus of e-commerce shopping reviews available in the ChineseNlpCorpus 2018. The experimental results demonstrate the effectiveness of the proposed algorithm, with Recall, Precision, and F1 scores reaching 0.9409, 0.9369, and 0.9404, respectively.
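The weight-allocation step can be illustrated with plain scaled dot-product self-attention over a sequence of feature vectors; the single-head form and tiny dimensions here are simplifications of SAC-BiLSTM's actual mechanism, not its implementation.

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def self_attention(x):
    """Scaled dot-product self-attention; x serves as queries, keys, and values."""
    d = len(x[0])
    scores = [[sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in x]
              for q in x]
    weights = [softmax(row) for row in scores]   # each row sums to 1
    return [[sum(w * v[j] for w, v in zip(row, x)) for j in range(d)]
            for row in weights]
```

In the full model the inputs would be BiLSTM hidden states from the character and word channels, and learned projections would produce distinct queries, keys, and values.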

    A Multi-Sensor System for Silkworm Cocoon Gender Classification via Image Processing and Support Vector Machine

    Sericulture is traditionally a labor-intensive, rural-based industry. In modern contexts, the development of process automation faces new challenges related to quality and efficiency. During the silkworm farming life cycle, a common issue is the gender classification of the cocoons. Improper cocoon separation negatively affects the quantity and quality of the yield, resulting in disruptive productivity bottlenecks. To tackle this issue, this paper proposes a multi-sensor system for silkworm cocoon gender classification and separation. Utilizing a load sensor and a digital camera, the system acquires the weight and digital images of individual silkworm cocoons. An image processing procedure is then applied to extract significant shape-related features from each image, which, combined with the weight data, are provided as inputs to train a Support Vector Machine-based pattern classifier for gender classification. Subsequently, an air blower mechanism and a conveyor system sort the cocoons into their respective bins. The developed system was trained and tested on two different silkworm cocoon breeds, CSR2 and Pure Mysore. The system's performance is finally discussed in terms of accuracy, robustness, and computation time.
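The classification step can be sketched as fusing the load-sensor weight with the image-derived shape features and applying a trained linear SVM decision function; the feature values, weight vector, and bias below are hypothetical placeholders, not the trained model or real cocoon data.

```python
def fuse_features(weight_mg, shape_features):
    """Concatenate the load-sensor weight with image-derived shape features."""
    return [weight_mg] + list(shape_features)

def svm_decide(x, w, b):
    """Linear SVM decision: sign(<w, x> + b); +1 and -1 map to the two gender bins."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1
```

The sign of the decision value would then drive the air-blower actuation toward one bin or the other.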